Multi-angle head pose estimation method based on optimized LeNet-5 network
ZHANG Hui, ZHANG Nana, HUANG Jun
Journal of Computer Applications    2021, 41 (6): 1667-1672.   DOI: 10.11772/j.issn.1001-9081.2020091427
Traditional head pose estimation methods suffer from low accuracy, or fail entirely, when key facial feature points cannot be located because of partial occlusion or excessively large rotation angles. To address this, a multi-angle head pose estimation method based on an optimized LeNet-5 network was proposed. Firstly, the depth, convolution kernel sizes and other parameters of the Convolutional Neural Network (CNN) were optimized to better capture global image features. Then, the pooling layers were improved by replacing the pooling operations with convolution operations to increase the nonlinearity of the network. Finally, the AdaBound optimizer was introduced and a Softmax regression model was used for pose classification training. During training, samples with hair occlusion, exaggerated expressions and glasses were added to the self-built dataset to improve the generalization ability of the network. Experimental results show that the proposed method can estimate head pose under multi-angle rotations such as head up, head down and head tilt, without locating key facial feature points and under occlusion by lighting, shadow and hair, achieving an accuracy of 98.7% on the public Pointing04 and CAS-PEAL-R1 datasets with an average running speed of 22-29 frames per second.
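As a rough illustration of the network changes described above (strided convolutions replacing pooling layers and a Softmax classification head), the following PyTorch sketch is hypothetical: the layer sizes, the number of pose classes and the use of Adam in place of AdaBound are assumptions, not the authors' configuration.

```python
# Hypothetical sketch of a LeNet-5-style pose classifier in which pooling layers
# are replaced by strided convolutions, as the abstract describes. All sizes are
# illustrative; Adam stands in for the AdaBound optimizer used in the paper.
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    def __init__(self, num_poses=9):             # number of pose classes is assumed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=2), nn.ReLU(),   # replaces pooling
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2), nn.ReLU(),   # replaces pooling
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(256), nn.ReLU(),
            nn.Linear(256, num_poses),            # Softmax is applied inside the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = PoseNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # AdaBound would replace this
criterion = nn.CrossEntropyLoss()                           # log-softmax + NLL
```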
β-distribution reduction based on discernibility matrix in interval-valued decision systems
LI Leitao, ZHANG Nan, TONG Xiangrong, YUE Xiaodong
Journal of Computer Applications    2021, 41 (4): 1084-1092.   DOI: 10.11772/j.issn.1001-9081.2020040563
The scale of interval-valued data keeps growing. When classical attribute reduction methods are applied to such data, the data must first be preprocessed, which leads to a loss of the original information. To solve this problem, a β-distribution reduction algorithm for interval-valued decision systems was proposed. Firstly, the concept of the β-distribution in interval-valued decision systems and the corresponding reduction target were given, and the related theories were proved. Then, the discernibility matrix and discernibility function of β-distribution reduction were constructed for this reduction target, and the β-distribution reduction algorithm for interval-valued decision systems was proposed. Finally, 14 UCI datasets were selected for experimental verification. On the Statlog dataset, with a similarity threshold of 0.6 and 100, 200, 400, 600 and 846 objects respectively, the average reduction length of the β-distribution reduction algorithm is 1.6, 2.2, 1.4, 2.4 and 2.6, that of the Distribution Reduction Algorithm based on Discernibility Matrix (DRADM) is 2.0, 3.0, 3.0, 4.0 and 4.0, and that of the Maximum Distribution Reduction Algorithm based on Discernibility Matrix (MDRADM) is 2.0, 3.0, 3.0, 4.0 and 3.0. The experimental results verify the effectiveness of the proposed β-distribution reduction algorithm.
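The abstract works with a similarity threshold on interval values; a minimal sketch of how a similarity-based discernibility matrix might be assembled is given below. The interval similarity measure (overlap over union) and all names are assumptions for illustration, not the paper's definitions.

```python
# Hypothetical sketch: build a discernibility matrix for an interval-valued
# decision table using an assumed interval similarity (overlap / union length).
# Objects are rows; each condition attribute value is a (low, high) interval.

def interval_similarity(a, b):
    """Assumed similarity: length of the overlap divided by length of the union."""
    low, high = max(a[0], b[0]), min(a[1], b[1])
    overlap = max(0.0, high - low)
    union = max(a[1], b[1]) - min(a[0], b[0])
    return overlap / union if union > 0 else 1.0

def discernibility_matrix(table, decisions, threshold=0.6):
    """Entry (i, j): attributes on which objects i and j are dissimilar (similarity
    below the threshold), recorded only for objects with different decisions."""
    n, m = len(table), len(table[0])
    matrix = [[set() for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if decisions[i] != decisions[j]:
                matrix[i][j] = {a for a in range(m)
                                if interval_similarity(table[i][a], table[j][a]) < threshold}
    return matrix

# Toy example: two condition attributes with interval values, binary decision.
table = [[(0.0, 0.4), (0.5, 1.0)],
         [(0.3, 0.9), (0.5, 0.9)],
         [(0.0, 0.2), (0.0, 0.3)]]
decisions = [1, 1, 0]
print(discernibility_matrix(table, decisions))
```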
Text sentiment analysis based on sentiment lexicon and context language model
YANG Shuxin, ZHANG Nan
Journal of Computer Applications    2021, 41 (10): 2829-2834.   DOI: 10.11772/j.issn.1001-9081.2020121900
Word embedding technology plays an important role in text sentiment analysis, but traditional word embedding techniques such as Word2Vec and GloVe (Global Vectors for word representation) assign each word a single vector and therefore suffer from the problem of single semantics. To address this problem, a text sentiment analysis model named Sentiment Lexicon Parallel-Embedding from Language Model (SLP-ELMo), based on a sentiment lexicon and the context language model ELMo (Embedding from Language Model), was proposed. Firstly, the sentiment lexicon was used to filter the words in each sentence. Secondly, the filtered words were fed into a character-level Convolutional Neural Network (char-CNN) to generate a character vector for each word. Then, the character vectors were input into the ELMo model for training, and an attention mechanism was added to the last layer of the ELMo vectors to train the word vectors better. Finally, the word vectors and the ELMo vectors were combined in parallel and input into a classifier for text sentiment classification. Compared with existing models, the proposed model achieves higher accuracy on the IMDB and SST-2 datasets, which validates its effectiveness.
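A minimal sketch of the first step described above, filtering a sentence through a sentiment lexicon before the embedding stages, might look like the following; the tiny lexicon and the whitespace tokenizer are assumptions for illustration only.

```python
# Hypothetical sketch of the lexicon-filtering step: keep only words that carry
# sentiment according to a lexicon before they go to the char-CNN / ELMo stage.
sentiment_lexicon = {"good", "great", "excellent", "bad", "terrible", "boring", "not"}

def filter_by_lexicon(sentence, lexicon=sentiment_lexicon):
    tokens = sentence.lower().split()
    return [w for w in tokens if w in lexicon]

print(filter_by_lexicon("The plot was boring but the acting was great"))
# ['boring', 'great']
```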
Interactive liveness detection combining with head pose and facial expression
HUANG Jun, ZHANG Nana, ZHANG Hui
Journal of Computer Applications    2020, 40 (7): 2089-2095.   DOI: 10.11772/j.issn.1001-9081.2019112059
To defend face recognition systems against photo and video attacks, an interactive liveness detection algorithm combining head pose and facial expression was proposed. Firstly, the number of convolution kernels, the number of network layers and the regularization of VGGNet were adjusted and optimized to construct a multi-layer convolutional head pose estimation network. Secondly, techniques such as global average pooling, local response normalization and replacing pooling with convolution were introduced to improve VGGNet and build an expression recognition network. Finally, the two networks were fused into an interactive liveness detection system that sends random instructions to users and performs liveness detection in real time. Experimental results show that the proposed head pose estimation network and expression recognition network achieve accuracies of 99.87% on the CAS-PEAL-R1 dataset and 99.60% on the CK+ dataset respectively, the overall accuracy of the liveness detection system reaches 96.70%, and its running speed reaches 20-28 frames per second, giving the system good generalization ability in practical applications.
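The interactive protocol, issuing a random pose or expression instruction and checking the user's response within a time limit, could be sketched as below; the instruction set, predictor interface and time limit are assumptions, not the system's actual design.

```python
# Hypothetical sketch of the interactive check: issue random instructions and
# verify that the recognized pose/expression matches within a time limit.
import random
import time

INSTRUCTIONS = ["turn_left", "turn_right", "look_up", "look_down", "smile"]  # assumed set

def liveness_check(predict_action, rounds=3, timeout=5.0):
    """predict_action() is assumed to grab a camera frame and return the action
    label recognized by the pose or expression network."""
    for _ in range(rounds):
        instruction = random.choice(INSTRUCTIONS)
        print("please", instruction)
        start = time.time()
        while time.time() - start < timeout:
            if predict_action() == instruction:
                break                     # user performed the requested action
        else:
            return False                  # timed out on this instruction -> not live
    return True                           # all random instructions satisfied
```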
Method of semantic entity construction and trajectory control for UAV electric power inspection
REN Na, ZHANG Nan, CUI Yan, ZHANG Rongxue, PANG Xinfu
Journal of Computer Applications    2020, 40 (10): 3095-3100.   DOI: 10.11772/j.issn.1001-9081.2020020198
Reasonable trajectory control is an important factor affecting the intelligent decision-making of an Unmanned Aerial Vehicle (UAV). Considering the limited local observability and the complexity of the aerial mission environment, a method of semantic entity construction and trajectory control for UAV electric power inspection was proposed. Firstly, a spatial topology network based on entity knowledge of the electric power inspection domain was built, and a semantic trajectory sequence network over position nodes, together with its semantic interfaces, was generated. Then, based on the result set of the similarity measure over spatial topology structures, a security licensing mechanism and a reinforcement-learning-based trajectory control strategy were proposed to realize UAV electric power inspection on the basis of shared concept connotations and position structures. Experimental results on a UAV electric power inspection example show that the optimal strategy obtained by the proposed method achieves maximum robustness, and that through the reinforcement learning in this method the fitness of the target network converges stably and the physical area coverage exceeds 95%, so the method provides a flight basis for decision-making in UAV electric power inspection tasks.
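The reinforcement-learning trajectory control is not specified in detail in the abstract; a generic tabular Q-learning sketch over a graph of position nodes, with an assumed reward that favours reaching inspection targets, is shown purely as an illustration of the kind of learning loop involved.

```python
# Hypothetical sketch: tabular Q-learning over a graph of position nodes.
# The graph, rewards and hyperparameters are assumptions, not the paper's setup.
import random

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}   # position-node adjacency
targets = {3}                                           # nodes that must be inspected

Q = {(s, a): 0.0 for s in graph for a in graph[s]}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    state = 0
    for _ in range(20):
        actions = graph[state]
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        reward = 1.0 if action in targets else -0.01    # assumed reward shaping
        best_next = max(Q[(action, a2)] for a2 in graph[action])
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = action

print({k: round(v, 2) for k, v in Q.items()})
```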
Incremental attribute reduction algorithm of positive region in interval-valued decision tables
BAO Di, ZHANG Nan, TONG Xiangrong, YUE Xiaodong
Journal of Computer Applications    2019, 39 (8): 2288-2296.   DOI: 10.11772/j.issn.1001-9081.2018122518
A large amount of interval-valued data grows dynamically in practical applications. If the classic non-incremental positive region attribute reduction is used, the positive region reduction of the updated interval-valued dataset must be recomputed from scratch, which greatly reduces the computational efficiency of attribute reduction. To solve this problem, incremental attribute reduction methods for the positive region in interval-valued decision tables were proposed. Firstly, the related concepts of positive region reduction in interval-valued decision tables were defined. Then, the incremental mechanisms of the positive region for adding a single object and for adding a group of objects were discussed and proved, and the corresponding single-object and group incremental attribute reduction algorithms were proposed. Finally, experiments were carried out on 8 UCI datasets. When the data size of the 8 datasets increases incrementally from 60% to 100%, the reduction time of the classic non-incremental algorithm on the 8 datasets is 36.59 s, 72.35 s, 69.83 s, 154.29 s, 80.66 s, 1498.11 s, 4124.14 s and 809.65 s; that of the single-object incremental algorithm is 19.05 s, 46.54 s, 26.98 s, 26.12 s, 34.02 s, 1270.87 s, 1598.78 s and 408.65 s; and that of the group incremental algorithm is 6.39 s, 15.66 s, 3.44 s, 15.06 s, 8.02 s, 167.12 s, 180.88 s and 61.04 s. The experimental results show that the proposed incremental positive region attribute reduction algorithms for interval-valued decision tables are efficient.
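For readers unfamiliar with the notion, a minimal sketch of computing the positive region of a plain symbolic decision table is given below; handling interval values as in the paper is more involved, so this only illustrates the underlying idea that the incremental algorithms avoid recomputing from scratch.

```python
# Hypothetical sketch: positive region of a plain (symbolic) decision table.
# POS_B(D) = union of the B-indiscernibility classes that are decision-consistent.
from collections import defaultdict

def positive_region(table, decisions, attributes):
    classes = defaultdict(list)
    for i, row in enumerate(table):
        key = tuple(row[a] for a in attributes)        # B-indiscernibility class key
        classes[key].append(i)
    pos = set()
    for members in classes.values():
        if len({decisions[i] for i in members}) == 1:  # class is decision-consistent
            pos.update(members)
    return pos

table = [[1, 0], [1, 0], [0, 1], [0, 1]]
decisions = ["a", "a", "b", "a"]
print(positive_region(table, decisions, attributes=[0, 1]))   # {0, 1}
```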
Train fault identification based on compressed sensing and deep wavelet neural network
DU Xiaolei, CHEN Zhigang, ZHANG Nan, XU Xu
Journal of Computer Applications    2019, 39 (7): 2175-2180.   DOI: 10.11772/j.issn.1001-9081.2018112278

Aiming at the difficulty of unsupervised feature learning on defect vibration data of the train running part, a method based on Compressed Sensing and Deep Wavelet Neural Network (CS-DWNN) was proposed. Firstly, the collected vibration data of the train running part were compressed and sampled with a Gaussian random matrix. Secondly, a DWNN based on an improved Wavelet Auto-Encoder (WAE) was constructed, and the compressed data were fed directly into the network for layer-by-layer automatic feature extraction. Finally, the multi-layer features learned by the DWNN were used to train multiple Deep Support Vector Machine (DSVM) and Deep Forest (DF) classifiers respectively, and their recognition results were integrated. In this method, the DWNN automatically mines hidden fault information from the compressed data, is less affected by prior knowledge and subjective factors, and avoids the complicated manual feature extraction process. The experimental results show that the CS-DWNN method achieves an average diagnostic accuracy of 99.16% and can effectively identify three common faults of the train running part, and that its fault recognition ability is superior to that of traditional methods such as Artificial Neural Network (ANN) and Support Vector Machine (SVM) and of deep learning models such as Deep Belief Network (DBN) and Stacked Denoising Auto-Encoder (SDAE).
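The compression step, projecting a long vibration segment onto a much shorter measurement vector with a Gaussian random matrix, can be sketched as follows; the segment length and compression ratio are assumptions, not the paper's settings.

```python
# Hypothetical sketch of the compressed-sampling step with a Gaussian random matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 1024                       # original vibration segment length (assumed)
m = 256                        # number of compressed measurements (assumed ratio 0.25)

phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))   # Gaussian measurement matrix
segment = rng.standard_normal(n)                        # stand-in for a vibration segment

measurements = phi @ segment    # compressed vector that the DWNN would take as input
print(measurements.shape)       # (256,)
```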

Positive region preservation reduction based on multi-specific decision classes in incomplete decision systems
KONG Heqing, ZHANG Nan, YUE Xiaodong, TONG Xiangrong, YU Tianyou
Journal of Computer Applications    2019, 39 (5): 1252-1260.   DOI: 10.11772/j.issn.1001-9081.2018091963
Most existing attribute reduction algorithms focus on all decision classes in a decision system, but in actual decision-making, decision makers may only be concerned with one or several of the decision classes. To solve this problem, a theoretical framework of positive region preservation reduction based on multiple specific decision classes in incomplete decision systems was proposed. Firstly, positive region preservation reduction for a single specific decision class in incomplete decision systems was defined. Secondly, this reduction was extended to multiple specific decision classes, and the corresponding discernibility matrix and discernibility function were constructed. Thirdly, after the related theorems were analyzed and proved, an algorithm of Positive region preservation Reduction for Multi-specific decision classes based on Discernibility Matrix in incomplete decision systems (PRMDM) was proposed. Finally, four UCI datasets were selected for experiments. On the Teaching-assistant-evaluation, House, Connectionist-bench and Cardiotocography datasets, the average reduction length of the Positive region preservation Reduction based on Discernibility Matrix (PRDM) algorithm is 4.00, 13.00, 9.00 and 20.00 respectively, while that of the PRMDM algorithm (with the number of specific decision classes set to 2) is 3.00, 8.00, 8.00 and 18.00 respectively. The validity of the PRMDM algorithm is verified by the experimental results.
Multi-scale attribute granule based quick positive region reduction algorithm
CHEN Manru, ZHANG Nan, TONG Xiangrong, DONGYE Shenglong, YANG Wenjing
Journal of Computer Applications    2019, 39 (12): 3426-3433.   DOI: 10.11772/j.issn.1001-9081.2019049238
In the classical heuristic attribute reduction algorithm for the positive region, the attribute with the maximum dependency degree with respect to the current positive region is added to the selected attribute subset in each iteration, which leads to a large number of iterations and low efficiency, and makes the algorithm hard to apply to feature selection on high-dimensional, large-scale datasets. To solve these problems, the monotonic relationship between positive regions in a decision system was studied, a formal description of the Multi-Scale Attribute Granule (MSAG) was given, and a Multi-scale Attribute Granule based Quick Positive Region reduction algorithm (MAG-QPR) was proposed. Each MSAG contains several attributes and can contribute a large positive region to the selected attribute subset; as a result, adding an MSAG in each iteration reduces the number of iterations and makes the selected attribute subset approach the positive-region resolving ability of the full condition attribute set more quickly, thereby improving the computational efficiency of the heuristic positive region attribute reduction algorithm. Experiments were conducted on 8 UCI datasets. On the Lung Cancer, Flag and German datasets, the running time acceleration ratios of MAG-QPR to the general improved Feature Selection algorithm based on Positive Approximation-Positive Region (FSPA-PR), the general improved Feature Selection algorithm based on Positive Approximation-Shannon's Conditional Entropy (FSPA-SCE), the Backward Greedy Reduction Algorithm for positive region Preservation (BGRAP) and the Backward Greedy Reduction Algorithm for Generalized decision preservation (BGRAG) are 9.64, 15.70, 5.03 and 2.50; 3.93, 7.55, 1.69 and 4.57; and 3.61, 6.49, 1.30 and 9.51 respectively. The experimental results show that the proposed MAG-QPR algorithm improves efficiency and has better classification accuracy.
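A sketch of the greedy idea, adding in each round the candidate group of attributes that most enlarges the positive region until the full positive region is preserved, is given below on plain symbolic data; the fixed-size attribute groups here are an assumption and are not the paper's MSAG construction.

```python
# Hypothetical sketch of a greedy positive-region reduction that adds a small group
# of attributes per iteration instead of a single attribute. Illustrative only.
from collections import defaultdict
from itertools import combinations

def pos_size(table, decisions, attrs):
    """Size of the positive region induced by the attribute subset attrs."""
    classes = defaultdict(list)
    for i, row in enumerate(table):
        classes[tuple(row[a] for a in attrs)].append(i)
    return sum(len(m) for m in classes.values()
               if len({decisions[i] for i in m}) == 1)

def greedy_granule_reduction(table, decisions, granule_size=2):
    all_attrs = list(range(len(table[0])))
    target = pos_size(table, decisions, all_attrs)   # resolving ability of all attributes
    selected = []
    while pos_size(table, decisions, selected) < target:
        candidates = [g for g in combinations(all_attrs, granule_size)
                      if not set(g) <= set(selected)]
        best = max(candidates, key=lambda g: pos_size(table, decisions, selected + list(g)))
        selected.extend(a for a in best if a not in selected)
    return selected

table = [[1, 0, 2], [1, 1, 2], [0, 1, 0], [0, 0, 0]]
decisions = ["a", "a", "b", "b"]
print(greedy_granule_reduction(table, decisions))
```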
Improved particle swarm optimization algorithm based on Hamming distance for traveling salesman problem
QIAO Shen, LYU Zhimin, ZHANG Nan
Journal of Computer Applications    2017, 37 (10): 2767-2772.   DOI: 10.11772/j.issn.1001-9081.2017.10.2767
An improved Particle Swarm Optimization (PSO) algorithm based on Hamming distance was proposed for discrete optimization problems. The basic idea and flow of traditional PSO were retained, and a new velocity representation based on Hamming distance was defined. Meanwhile, to make the algorithm more efficient and keep the iterative process from falling into local optima, 2-opt and 3-opt operators were designed, and a randomized greedy rule was used to improve solution quality and speed up convergence. In the later stage of the algorithm, to increase the particles' global search ability over the whole solution space, a portion of the particles was regenerated to re-explore the solution space. Finally, a number of standard TSP instances were used to verify the effectiveness of the proposed algorithm. The experimental results show that the proposed algorithm finds the best known solutions for small-scale TSP instances; for large-scale instances, for example those with more than 100 cities, satisfactory solutions are also found, and the deviations from the best known solutions are small, usually within 5%.
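Two of the ingredients mentioned, the Hamming distance between two tours (how many positions differ) and a 2-opt move (reversing a segment), can be sketched as follows; the permutation representation of a tour is an assumption for illustration.

```python
# Hypothetical sketch: Hamming distance between two tours and a 2-opt move.
# Tours are represented as city sequences (permutations); illustrative only.
def hamming_distance(tour_a, tour_b):
    """Number of positions at which the two tours visit different cities."""
    return sum(1 for a, b in zip(tour_a, tour_b) if a != b)

def two_opt(tour, i, j):
    """Reverse the segment tour[i:j+1]; the classic 2-opt neighbourhood move."""
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

current = [0, 1, 2, 3, 4, 5]
best    = [0, 2, 1, 3, 5, 4]
print(hamming_distance(current, best))   # 4 -> used as the "velocity" magnitude
print(two_opt(current, 1, 3))            # [0, 3, 2, 1, 4, 5]
```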
Collaborative filtering recommendation method based on improved heuristic similarity model
ZHANG Nan, LIN Xiaoyong, SHI Shenghui
Journal of Computer Applications    2016, 36 (8): 2246-2251.   DOI: 10.11772/j.issn.1001-9081.2016.08.2246
To improve the accuracy and efficiency of collaborative filtering recommendation, a collaborative filtering recommendation method based on an improved heuristic similarity model, named PSJ, was proposed; it considers the difference between user ratings, users' global rating preferences and the number of co-rated items. The Proximity factor of PSJ uses an exponential function to reflect the influence of rating differences, which avoids the division-by-zero problem. The Significance factor of the New Heuristic Similarity Model (NHSM) and the User Rating Preference (URP) factor were merged into the Significance factor of PSJ, which gives PSJ lower computational complexity than NHSM. To improve recommendation performance under data sparsity, both the variance of user ratings and users' global rating preferences were considered in PSJ. In the experiments, the precision and recall of Top-k recommendation were used to evaluate the results. The results show that, compared with NHSM, the Jaccard algorithm, the Adjusted COSine similarity (ACOS) algorithm, the Jaccard Mean Squared Difference (JMSD) algorithm and the Sigmoid function based Pearson Correlation Coefficient method (SPCC), PSJ improves both precision and recall.
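Only the flavour of such heuristic factors can be shown without the paper's exact formulas; the sketch below combines an exponential proximity term with a common-item significance term, and both expressions are assumptions rather than the published PSJ definition.

```python
# Hypothetical sketch of heuristic similarity factors; the exact PSJ formulas are
# not reproduced here, so these expressions are illustrative assumptions only.
import math

def proximity(r_u, r_v):
    """Exponential decay in the rating difference on a co-rated item
    (no denominator, so nothing can become zero)."""
    return math.exp(-abs(r_u - r_v))

def significance(num_common, scale=5.0):
    """Grows with the number of co-rated items and saturates at 1."""
    return 1.0 / (1.0 + math.exp(-num_common / scale))

def user_similarity(ratings_u, ratings_v):
    common = set(ratings_u) & set(ratings_v)
    if not common:
        return 0.0
    prox = sum(proximity(ratings_u[i], ratings_v[i]) for i in common) / len(common)
    return prox * significance(len(common))

u = {"m1": 5, "m2": 3, "m3": 4}
v = {"m2": 4, "m3": 4, "m4": 2}
print(round(user_similarity(u, v), 3))
```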
Super-resolution image reconstruction algorithm based on image patch iteration and sparse representation
YANG Cunqiang, HAN Xiaojun, ZHANG Nan
Journal of Computer Applications    2016, 36 (2): 521-525.   DOI: 10.11772/j.issn.1001-9081.2016.02.0521
Concerning slow reconstruction and the differing content of different regions of the image to be reconstructed, an improved super-resolution image reconstruction algorithm based on image patch iteration and sparse representation was proposed. Image patches were first divided into three types according to threshold features, and the three types were then treated separately during reconstruction: 4N×4N patches were reconstructed with Bicubic Interpolation (BI); for 2N×2N patches, high- and low-resolution dictionary pairs were obtained with the K-Singular Value Decomposition (K-SVD) algorithm and reconstruction was completed with the Orthogonal Matching Pursuit (OMP) algorithm; N×N patches were decomposed into a smooth layer and a texture layer by Morphological Component Analysis (MCA), and each layer was reconstructed with OMP using the corresponding dictionary pair. Compared with methods based on group sparse representation, MCA, and two-stage multi-frequency-band dictionaries, the proposed algorithm shows a significant improvement in subjective visual quality, evaluation indexes and reconstruction speed. The experimental results show that the proposed algorithm recovers more details in edge patches and irregularly structured regions, with a better reconstruction effect.
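The per-patch sparse coding step, representing a patch over a dictionary with OMP, can be sketched as follows; the dictionary here is random rather than K-SVD-trained and the sizes are assumed, so the example only illustrates the mechanics.

```python
# Hypothetical sketch of the OMP sparse-coding step for one image patch.
# A random dictionary stands in for a K-SVD-trained one; sizes are illustrative.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
patch_dim, n_atoms = 64, 256                  # 8x8 patch, 256 dictionary atoms (assumed)
D = rng.standard_normal((patch_dim, n_atoms))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms

patch = rng.standard_normal(patch_dim)        # stand-in for a vectorized patch

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8, fit_intercept=False)
omp.fit(D, patch)
sparse_code = omp.coef_                       # sparse representation over the dictionary
reconstruction = D @ sparse_code              # patch approximated from a few atoms
print(np.count_nonzero(sparse_code), reconstruction.shape)
```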
Clustering routing algorithm based on attraction factor and hybrid transmission
ZHAO Zuopeng, ZHANG Nana, HOU Mengting, GAO Meng
Journal of Computer Applications    2015, 35 (12): 3331-3335.   DOI: 10.11772/j.issn.1001-9081.2015.12.3331
To effectively reduce the energy consumption of Wireless Sensor Networks (WSNs) and extend the network life cycle, Low Energy Adaptive Clustering Hierarchy (LEACH) and other clustering routing protocols were analyzed, and a Clustering Routing algorithm based on Attraction factor and Hybrid transmission (CRAH) was proposed to address their weaknesses. Firstly, to solve the problem of unreasonable Cluster Head (CH) selection, node residual energy and node location were combined by weighted sum into a new CH selection index. Then, the tasks of the CH nodes were reassigned and new fusion nodes were chosen. The fusion nodes send data to the Base Station (BS) through a hybrid of single-hop and multi-hop transmission, and a new algorithm, Attraction Factor-Dijkstra (AF-DK), was proposed by combining the attraction factor with the Dijkstra algorithm to find optimal paths for the fusion nodes. The simulation results show that, compared with the LEACH, LEACH-Centralized (LEACH-C) and Hybrid Energy-Efficient Distributed clustering (HEED) protocols, the CRAH algorithm improves the network lifetime by about 51.56%, 47.1% and 42% respectively and slows network energy consumption significantly, while the amount of data received by the BS decreases by 69.9% on average. The CRAH algorithm makes CH selection more reasonable, effectively reduces redundant data during communication, balances network energy consumption and extends the network life cycle.
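The weighted-sum cluster-head index, combining residual energy with node location, could look like the sketch below; the weights and the use of distance to the base station as the location term are assumptions, not the paper's formula.

```python
# Hypothetical sketch of a weighted-sum cluster-head selection score combining
# residual energy with distance to the base station; weights are assumed.
import math

def ch_score(residual_energy, initial_energy, position, bs_position,
             max_distance, w_energy=0.7, w_location=0.3):
    energy_term = residual_energy / initial_energy                # higher is better
    distance = math.dist(position, bs_position)
    location_term = 1.0 - distance / max_distance                 # closer to BS is better
    return w_energy * energy_term + w_location * location_term

nodes = {"n1": (0.8, (10, 20)), "n2": (0.5, (5, 5)), "n3": (0.9, (60, 70))}
bs = (0, 0)
scores = {nid: ch_score(e, 1.0, pos, bs, max_distance=100.0)
          for nid, (e, pos) in nodes.items()}
print(max(scores, key=scores.get), scores)   # node with the highest score becomes CH
```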
Medium access control protocol with network utility maximization and collision avoidance for wireless sensor networks
LIU Tao, LI Tianrui, YIN Feng, ZHANG Nan
Journal of Computer Applications    2014, 34 (11): 3196-3200.   DOI: 10.11772/j.issn.1001-9081.2014.11.3196

To avoid transmission collisions and improve energy efficiency in periodic-report Wireless Sensor Networks (WSNs), a Medium Access Control (MAC) protocol with network utility maximization and collision avoidance, called UM-MAC, was proposed. UM-MAC uses a Time Division Multiple Access (TDMA) scheduling mechanism and introduces a utility model into the slot assignment process. A utility maximization problem that jointly optimizes link reliability and energy consumption was formulated on the basis of this model, and a heuristic algorithm was proposed so that the network can quickly find a slot scheduling strategy that maximizes network utility while avoiding transmission collisions. Comparison experiments among the UM-MAC, S-MAC and CA-MAC (Collision Avoidance MAC) protocols were conducted on networks with different numbers of nodes: UM-MAC obtained larger network utility and a higher average packet delivery ratio, its lifetime lay between those of S-MAC and CA-MAC, and its average transmission delay increased on networks with different loads. The simulation results show that UM-MAC achieves collision avoidance and improves network performance in terms of packet delivery ratio and energy efficiency; meanwhile, the TDMA-based protocol is not better than contention-based protocols in lightly loaded networks.
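A very small sketch of a TDMA slot assignment that keeps interfering links out of the same slot while greedily serving higher-utility links first is given below; the interference model and utility values are assumptions, not the UM-MAC heuristic itself.

```python
# Hypothetical sketch: greedy TDMA slot assignment that keeps interfering links
# out of the same slot and serves higher-utility links first. All data are assumed.
links = {"A->B": 0.9, "B->C": 0.7, "C->D": 0.8, "D->E": 0.6}    # link: utility
interferes = {("A->B", "B->C"), ("B->C", "C->D"), ("C->D", "D->E")}

def conflict(l1, l2):
    return (l1, l2) in interferes or (l2, l1) in interferes

def assign_slots(links, n_slots=3):
    schedule = {s: [] for s in range(n_slots)}
    for link in sorted(links, key=links.get, reverse=True):      # highest utility first
        for slot in range(n_slots):
            if all(not conflict(link, other) for other in schedule[slot]):
                schedule[slot].append(link)                      # collision-free placement
                break
    return schedule

print(assign_slots(links))
```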

Research and security analysis on open RFID mutual authentication protocol
ZHANG Nan, ZHANG Jianhua
Journal of Computer Applications    2013, 33 (01): 131-134.   DOI: 10.3724/SP.J.1087.2013.00131
Considering that Radio Frequency Identification (RFID) systems have many security problems because of limited resources and broadcast transmission, a new improved mutual authentication protocol was put forward. The protocol combines symmetric encryption with random numbers, which offers a good balance of security, efficiency and cost. It can be applied in an open environment where the transmission between the database and the reader is not required to be secure, which improves the mobility and application range of the reader. BAN logic was used for formal analysis, proving that the proposed protocol is secure and its goals are reachable. The protocol effectively resists security attacks such as eavesdropping, tracing and replay.
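The general pattern the abstract describes, mutual authentication from a shared symmetric key and fresh random numbers, can be sketched as a generic challenge-response exchange; this is not the paper's protocol, and HMAC stands in here for the symmetric primitive.

```python
# Hypothetical sketch of a generic challenge-response mutual authentication using a
# shared key and random nonces. HMAC stands in for the symmetric primitive; this is
# NOT the protocol of the paper, only an illustration of the general pattern.
import hmac
import hashlib
import os

shared_key = os.urandom(16)            # pre-shared between tag and back-end database

def mac(key, *parts):
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

# Reader/database side: send a fresh challenge to the tag.
reader_nonce = os.urandom(8)

# Tag side: answer with its own nonce and a MAC over both nonces.
tag_nonce = os.urandom(8)
tag_response = mac(shared_key, reader_nonce, tag_nonce)

# Database side: verify the tag, then authenticate itself with a MAC the tag expects.
assert hmac.compare_digest(tag_response, mac(shared_key, reader_nonce, tag_nonce))
db_response = mac(shared_key, tag_nonce, reader_nonce)

# Tag side: verify the database response, completing mutual authentication.
assert hmac.compare_digest(db_response, mac(shared_key, tag_nonce, reader_nonce))
print("mutual authentication succeeded")
```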
Network detecting control based on ActiveX technique
ZHANG Nan, LI Zhi-shu, ZHANG Jian-hua
Journal of Computer Applications    2005, 25 (08): 1865-1866.   DOI: 10.3724/SP.J.1087.2005.01865
A network packet detecting control was designed and implemented based on ActiveX technology. Using this control, applications in all kinds of programming environments can capture and process network packets according to the user's commands. The control provides programmers with a unified interface for developing network applications that need packet capture, which brings great convenience to users.
Development and implementation of network sniffer based on data link layer
ZHANG Nan, LI Zhi-shu, ZHANG Jian-hua, LI Qi
Journal of Computer Applications    2005, 25 (05): 1185-1186.   DOI: 10.3724/SP.J.1087.2005.1185
A method to develop a network sniffer based on the data link layer was put forward. A sniffer implemented by this method captures data frames on the network through the network card driver and can be used to capture all kinds of data packets. Compared with sniffers that capture packets at the network layer, it is more powerful.
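The idea of capturing raw frames at the data link layer, rather than at the network layer, can be illustrated with a raw-socket sketch; this Linux-only Python example is merely an analogue of the described NIC-driver-based approach and requires root privileges.

```python
# Hypothetical sketch: capture raw Ethernet frames at the data link layer.
# Linux-only (AF_PACKET) and requires root; an analogue of the described approach.
import socket

ETH_P_ALL = 0x0003                       # capture every protocol, not just IP

sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
for _ in range(5):                       # read a handful of frames
    frame, _addr = sniffer.recvfrom(65535)
    dst, src, ethertype = frame[0:6], frame[6:12], frame[12:14]
    print(src.hex(":"), "->", dst.hex(":"), "type", ethertype.hex())
sniffer.close()
```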